# CLOUDP-349087: Fix TLS disable + scale up test #490
Conversation
```python
@pytest.mark.e2e_disable_tls_scale_up
def test_tls_is_disabled_and_scaled_up(replica_set: MongoDB):
    replica_set.load()
    replica_set["spec"]["members"] = 5
```
The issue in the test is that it was doing the update in two steps (scale, then disable TLS), while the whole point is to change both at the same time (on top of having a duplicate function name).
```go
// Check if TLS is being disabled. If so, we need to lock replicas at the current member count
// to prevent scaling during the TLS disable operation. This decision is made once here and
// applied to both the StatefulSet and OM automation config.
tlsWillBeDisabled, err := checkIfTLSWillBeDisabled(conn, rs, log)
```
What if we just block this with validation and say that it's not possible to change the member count and disable TLS at the same time?
Yes, after discussing it in DM, let's do that instead of adding complexity to the reconcile loop. This is not a common use case anyway.
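The validation agreed on above could look roughly like the sketch below. The types and the function name are hypothetical stand-ins, not the operator's real API; the point is only the shape of the check: compare the old and new specs, and reject an update that both changes the member count and disables TLS.

```go
package main

import (
	"errors"
	"fmt"
)

// Spec is a trimmed-down, hypothetical stand-in for the MongoDB resource
// spec; only the fields relevant to this validation are modeled.
type Spec struct {
	Members    int
	TLSEnabled bool
}

// validateNoScaleWithTLSDisable rejects an update that changes the member
// count and disables TLS in the same step, so the reconcile loop never has
// to handle the two operations simultaneously.
func validateNoScaleWithTLSDisable(oldSpec, newSpec Spec) error {
	tlsBeingDisabled := oldSpec.TLSEnabled && !newSpec.TLSEnabled
	membersChanging := oldSpec.Members != newSpec.Members
	if tlsBeingDisabled && membersChanging {
		return errors.New("cannot change member count and disable TLS in the same update; apply the changes separately")
	}
	return nil
}

func main() {
	oldSpec := Spec{Members: 3, TLSEnabled: true}

	// Disabling TLS alone passes validation.
	fmt.Println(validateNoScaleWithTLSDisable(oldSpec, Spec{Members: 3, TLSEnabled: false}))

	// Scaling up while disabling TLS is rejected.
	fmt.Println(validateNoScaleWithTLSDisable(oldSpec, Spec{Members: 5, TLSEnabled: false}))
}
```

In a real operator this check would typically live in a validating admission webhook, where both the old and new objects are available.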
# CLOUDP-347497 - Single cluster Replica Set Controller Refactoring

## Why this refactoring

The single-cluster RS controller was mixing two concerns:

- **Kubernetes stuff** (StatefulSets, pods, volumes)
- **Ops Manager/MongoDB stuff** (MongoDB processes, replication config)

This worked fine for single-cluster, but it's a problem when you think about multi-cluster:

- Multi-cluster has **multiple StatefulSets** (one per cluster) but only **one logical ReplicaSet** in Ops Manager
- The OM automation config doesn't care about how many K8s clusters you have or how the pods are deployed

So we need to separate these layers properly.

## Main changes

### 1. Broke down the huge Reconcile() method

Before: ~300 lines of inline logic in Reconcile()

Now:

```go
Reconcile()
├── reconcileMemberResources()      // Handles all K8s resource creation
│   ├── reconcileHostnameOverrideConfigMap()
│   ├── ensureRoles()
│   └── reconcileStatefulSet()      // StatefulSet-specific logic isolated here
│       └── buildStatefulSetOptions()  // Builds STS configuration
└── updateOmDeploymentRs()          // Handles Ops Manager automation config updates
```

This makes it way easier to understand what's happening and matches the multi-cluster controller structure.

### 2. Removed StatefulSet dependency from OM operations

Created new helper functions that work directly with MongoDB resources instead of StatefulSets:

- `CreateMongodProcessesFromMongoDB()` - was using StatefulSet before
- `BuildFromMongoDBWithReplicas()` - same
- `WaitForRsAgentsToRegisterByResource()` - same

These mirror the existing `...FromStatefulSet` functions but take MongoDB resources instead.

**Why it matters:** The OM layer now only cares about the MongoDB resource definition, not how it's deployed in K8s. This makes the code work the same way for both single-cluster and multi-cluster.

### 3. Added publishAutomationConfigFirstRS checks

Dedicated function for RS instead of using the shared one. Does not rely on a StatefulSet object.
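The idea behind change 2 can be illustrated with a minimal sketch. All types and names below are illustrative stand-ins, not the operator's real API; the point is the shape of the refactor, where process construction depends only on the MongoDB resource definition, not on the StatefulSet it happens to run as.

```go
package main

import "fmt"

// MongoDB is a hypothetical, trimmed-down resource definition: everything
// the OM layer needs, with no Kubernetes deployment details.
type MongoDB struct {
	Name    string
	Members int
	Version string
}

// Process is a stand-in for an Ops Manager automation-config process entry.
type Process struct {
	Name    string
	Version string
}

// createMongodProcessesFromMongoDB sketches the idea behind the new
// CreateMongodProcessesFromMongoDB helper: derive the desired OM processes
// purely from the resource, so the same code serves single- and
// multi-cluster regardless of how many StatefulSets back the replica set.
func createMongodProcessesFromMongoDB(mdb MongoDB, memberCount int) []Process {
	processes := make([]Process, 0, memberCount)
	for i := 0; i < memberCount; i++ {
		processes = append(processes, Process{
			Name:    fmt.Sprintf("%s-%d", mdb.Name, i),
			Version: mdb.Version,
		})
	}
	return processes
}

func main() {
	mdb := MongoDB{Name: "my-rs", Members: 3, Version: "7.0.0"}
	for _, p := range createMongodProcessesFromMongoDB(mdb, mdb.Members) {
		fmt.Println(p.Name)
	}
}
```

Passing the member count separately rather than reading it from a StatefulSet is what lets the caller decide the target size (e.g. during scaling) without the OM layer knowing about K8s objects.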
## Important for review

The ideal way to review this PR is to compare the new structure to `mongodbmultireplicaset_controller.go`. The aim of the refactoring is to get the single-cluster controller closer to it. Look at:

- `reconcileMemberResources()` in both controllers - similar structure now
- `updateOmDeploymentRs()` - no more StatefulSet dependency
- New helper functions in `om/process` and `om/replicaset` - mirror existing patterns

## Bug found along the way

The logic to handle **scale up + disable TLS at the same time** doesn't actually work properly and should be blocked by validation (see [draft PR #490](#490) for more details).

## Tests added

Added tests for the new helper functions:

- `TestCreateMongodProcessesFromMongoDB` - basic scenarios, scaling, custom domains, TLS, additional config
- `TestBuildFromMongoDBWithReplicas` - integration test checking ReplicaSet structure and member options propagation
- `TestPublishAutomationConfigFirstRS` - automation config publish logic with various TLS/auth scenarios
Closed in favor of #549
Summary
This test has been broken for a while. Because there were two functions with the name `test_tls_is_disabled_and_scaled_up`, only one of them was running every time. I think this scenario has a low chance of occurring in production, which is why we never had a ticket related to this bug.
On top of that, the test performed the update in two separate steps, whereas to test this behaviour it should be done in one update.
I uncovered it as part of the larger refactoring of the controller, but to keep the PR scope reasonable, I extracted the related changes since they are self-contained.
The bug may exist in multi-cluster as well. It was first reported in 2021: https://jira.mongodb.org/browse/CLOUDP-80768
The blocking mechanism will be implemented in a better way after the multi-cluster-first refactor, since we will keep track of a global reconciler state, notably holding the target number of replicas for this reconciliation.
Proof of Work
In commit 1d8847e, which only fixes the e2e test, the test fails (see the evg task), showing that the reconciler had to be fixed as well.
The PR is correct if CI is green again.
Checklist
`skip-changelog` label if not needed